CRUXEval-input: results by model

p-values for model pairs

The null hypothesis is that models A and B are equally strong: whenever they disagree on a problem, each has a 1/2 chance of winning; ties are ignored. The p-value is the probability, under this null hypothesis, of observing a difference at least as extreme as the one actually observed. Across all pairs of models, it depends mainly on the difference in accuracy. Hover over each model pair for detailed information.
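Concretely, this is a two-sided sign test over the decisive problems. Below is a minimal sketch of the computation, assuming the per-pair counts #A_win and #B_win have already been tallied (the function name is ours, not from the codebase):

```python
from scipy.stats import binomtest

def sign_test_pvalue(a_wins: int, b_wins: int) -> float:
    """Two-sided sign-test p-value for a model pair.

    a_wins / b_wins count the problems where exactly one of the two
    models succeeds; ties (both solve or both fail) are ignored.
    Under the null, each decisive problem is a fair coin flip.
    """
    n = a_wins + b_wins
    if n == 0:
        return 1.0  # the models never disagree
    return binomtest(a_wins, n=n, p=0.5).pvalue
```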

p-values vs. differences

The range of p-values attainable at each difference in accuracy, over all model pairs.
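A fixed accuracy gap pins down the margin #A_win - #B_win but not the number of decisive problems #A_win + #B_win, which is why each difference maps to a range of p-values rather than a single value. A sketch of that range, reusing sign_test_pvalue from above and assuming a benchmark of 800 problems (the size of CRUXEval):

```python
def pvalue_range(acc_diff: float, n_problems: int = 800):
    """Smallest and largest sign-test p-values attainable when the
    accuracy gap (and hence the win margin) is fixed but the number
    of decisive problems is allowed to vary."""
    margin = round(acc_diff * n_problems)  # #A_win - #B_win is fixed
    pvals = [
        sign_test_pvalue(b_wins + margin, b_wins)
        for b_wins in range(n_problems - margin + 1)
        if 2 * b_wins + margin <= n_problems  # can't exceed the benchmark
    ]
    return min(pvals), max(pvals)
```

The largest p-value occurs when the models disagree as often as possible: a fixed margin is least surprising when it sits on top of many coin flips.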

Differences vs. inconsistencies

Here is a more informative view of the source data used to compute the p-values. Any model pair to the right of the parabola is statistically different at the given significance level. The plot shows a fairly sharp transition: no model pair has a small #A_win + #B_win, which rules out significant results at small |#A_win - #B_win|. For more explanation, see the doc.
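The shape of the boundary follows from the normal approximation to the sign test: under the null, #A_win - #B_win has mean 0 and variance n = #A_win + #B_win, so significance at level alpha requires roughly |#A_win - #B_win| >= z * sqrt(n). A sketch of that curve (two-sided test, normal approximation assumed):

```python
import numpy as np
from scipy.stats import norm

def significance_boundary(n_decisive: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Minimum |#A_win - #B_win| needed for significance at level alpha.

    Under the null, #A_win - #B_win is approximately N(0, n) with
    n = #A_win + #B_win, so the boundary |diff| = z * sqrt(n), i.e.
    n = diff**2 / z**2, traces the parabola in the plot.
    """
    z = norm.ppf(1 - alpha / 2)  # two-sided critical value
    return z * np.sqrt(n_decisive)
```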

Results table by model

We show three methods currently used for evaluating code models: raw accuracy (as reported by benchmarks), average win rate over all other models (used by BigCode), and Elo (Bradley-Terry coefficients, following Chatbot Arena). Average win rate correlates well with Elo. GPT-3.5 is anchored at an Elo of 1000 when available; otherwise the average Elo is set to 1000.
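For reference, here is a sketch of how the win-rate and Elo columns could be derived from a matrix of pairwise win counts. The matrix layout, the MM fitting loop, and the anchoring step are our assumptions about the method described above, not the exact pipeline behind this table:

```python
import numpy as np

def average_win_rate(wins: np.ndarray) -> np.ndarray:
    """wins[i, j] = number of problems model i solves and model j does not.
    Returns each model's win rate averaged over all opponents."""
    games = wins + wins.T  # decisive comparisons per pair
    rate = np.divide(wins, games,
                     out=np.full_like(wins, 0.5, dtype=float),
                     where=games > 0)
    np.fill_diagonal(rate, np.nan)  # a model doesn't play itself
    return np.nanmean(rate, axis=1)

def bradley_terry_elo(wins: np.ndarray, anchor=None, n_iter: int = 200) -> np.ndarray:
    """Bradley-Terry strengths via the standard MM (Zermelo) updates,
    mapped to an Elo-like scale with 400 * log10(strength).

    If anchor is the index of a reference model (e.g. gpt-3.5), its
    rating is pinned to 1000; otherwise the average is set to 1000.
    Assumes every model wins at least one comparison.
    """
    games = wins + wins.T
    total = wins.sum(axis=1)
    p = np.ones(wins.shape[0])
    for _ in range(n_iter):
        denom = games / (p[:, None] + p[None, :])
        p = total / denom.sum(axis=1)
        p /= np.exp(np.log(p).mean())  # fix the arbitrary overall scale
    elo = 400.0 * np.log10(p)
    elo += 1000.0 - (elo[anchor] if anchor is not None else elo.mean())
    return elo
```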

model                       pass@1  std    win rate  Elo
gpt-4-turbo-2024-04-09+cot  75.7%   0.78%  88.6%     1286.5
gpt-4-0613+cot              75.5%   0.88%  90.0%     1308.7
claude-3-opus-20240229+cot  73.4%   0.00%  86.6%     1252.3
gpt-4-0613                  69.8%   0.43%  85.8%     1237.4
gpt-4-turbo-2024-04-09      68.5%   0.43%  85.1%     1229.1
claude-3-opus-20240229      64.2%   0.00%  78.7%     1151.1
gpt-3.5-turbo-0613+cot      50.3%   1.10%  57.0%     986.0
codellama-34b+cot           50.1%   0.93%  58.0%     995.2
codetulu-2-34b              49.2%   0.69%  58.8%     1001.6
gpt-3.5-turbo-0613          49.0%   0.55%  58.7%     1000.0
codellama-13b+cot           47.4%   0.85%  54.7%     979.6
codellama-34b               47.2%   0.71%  54.5%     975.7
phind                       47.2%   0.61%  54.9%     973.3
deepseek-base-33b           46.5%   0.71%  53.4%     966.6
deepseek-instruct-33b       46.5%   0.65%  54.3%     967.6
codellama-python-34b        43.9%   0.70%  50.8%     947.6
wizard-34b                  42.7%   0.60%  46.8%     917.4
codellama-13b               42.5%   0.76%  45.6%     928.3
deepseek-base-6.7b          41.9%   0.70%  42.7%     901.3
magicoder-ds-7b             41.7%   0.63%  44.8%     911.5
codellama-7b+cot            40.4%   0.95%  40.5%     876.2
codellama-python-13b        39.7%   0.75%  40.4%     882.1
mixtral-8x7b                39.3%   0.75%  39.1%     868.7
deepseek-instruct-6.7b      37.4%   0.60%  36.9%     852.8
codellama-python-7b         37.3%   0.65%  37.1%     864.8
wizard-13b                  36.5%   0.60%  35.9%     843.3
codellama-7b                36.0%   0.69%  32.7%     833.6
mistral-7b                  35.0%   0.69%  34.0%     840.6
phi-2                       31.6%   0.70%  28.2%     794.1
starcoderbase-16b           31.3%   0.70%  26.0%     772.8
starcoderbase-7b            29.7%   0.65%  24.1%     755.9
deepseek-base-1.3b          27.8%   0.60%  21.5%     730.3
deepseek-instruct-1.3b      27.2%   0.55%  23.6%     749.9
phi-1.5                     23.2%   0.70%  19.1%     699.0
phi-1                       13.1%   0.41%  9.4%      547.1